Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.
- Free, publicly accessible full text available April 25, 2026.
- Cyber-physical systems interact with the world through software controlling physical effectors. Carefully designed controllers, implemented as safety-critical control software, also interact with other parts of the software suite and may be difficult to separate, verify, or maintain. Moreover, some software changes that are not intended to impact control system performance do change controller response through a variety of means, including interaction with external libraries or unmodeled changes that exist only in the cyber system (e.g., exception handling). As a result, identifying safety-critical control software, its boundaries with other embedded software in the system, and the way in which control software evolves could help developers isolate, test, and verify control implementations and improve control software development. In this work we present an automated technique, based on a novel application of machine learning, to detect commits related to control software, its changes, and how the control software evolves (a toy classifier sketch appears after this list). We leverage messages from developers (e.g., commit comments) and the code changes themselves to understand how control software is refined, extended, and adapted over time. We examine three distinct, popular, real-world, safety-critical autopilots (ArduPilot, Paparazzi UAV, and LibrePilot) to test our method, demonstrating an effective detection rate of 0.95 for control-related code changes. Free, publicly accessible full text available October 31, 2025.
- Despite the availability of numerous automatic accessibility testing solutions, web accessibility issues persist on many websites. Moreover, there is a lack of systematic evaluations of the efficacy of current accessibility testing tools. To address this gap, we present the first mutation analysis framework, called Ma11y, designed to assess web accessibility testing tools. Ma11y includes 25 mutation operators that intentionally violate various accessibility principles (one illustrative operator is sketched after this list) and an automated oracle to determine whether a mutant is detected by a testing tool. Evaluation on real-world websites demonstrates the practical applicability of the mutation operators and the framework's capacity to assess tool performance. Our results show that current tools fail to identify nearly 50% of the accessibility bugs injected by our framework, underscoring the need for more effective accessibility testing tools. Finally, the framework's accuracy and performance attest to its potential for seamless, automated application in practical settings.
- The tools and infrastructure used in tech, including Open Source Software (OSS), can embed "inclusivity bugs": features that disproportionately disadvantage particular groups of contributors. To see whether OSS developers have existing practices to ward off such bugs, we surveyed 266 OSS developers. Our results show that a majority (77%) of developers do not use any inclusivity practices, and 92% of respondents cited a lack of concrete resources to enable them to do so. To help fill this gap, this paper introduces AID, a tool that automates the GenderMag method to systematically find gender-inclusivity bugs in software. We then present the results of the tool's evaluation on 20 GitHub projects. The tool achieved a precision of 0.69, a recall of 0.92, and an F-measure of 0.79 (checked arithmetically after this list), and even captured some inclusivity bugs that human GenderMag teams missed.